
    Online Robot Introspection via Wrench-based Action Grammars

    Robotic failure is all too common in unstructured robot tasks. Despite well-designed controllers, robots often fail due to unexpected events. How do robots detect unexpected events? Many do not. Most robots are driven by the sense-plan-act paradigm; more recently, however, robots have begun to adopt a sense-plan-act-verify paradigm. In this work, we present a principled methodology to bootstrap online robot introspection for contact tasks. In effect, we aim to enable the robot to answer the questions: What did I do? Is my behavior as expected or not? To this end, we analyze noisy wrench data and postulate that it inherently contains patterns that can be effectively represented by a vocabulary. The vocabulary is generated by segmenting and encoding the data. When the wrench information represents a sequence of sub-tasks, the vocabulary can be thought of as forming a sentence (a set of words with grammar rules) for a given sub-task, allowing each sub-task to be uniquely represented. The grammar, which can also include unexpected events, was classified in offline and online scenarios as well as in simulated and real robot experiments. Multiclass Support Vector Machines (SVMs) were used offline, while probabilistic SVMs are used online to give temporal confidence to the introspection result. The contribution of our work is a generalizable online semantic scheme that enables a robot to understand its high-level state, whether nominal or abnormal. It is shown to work in offline and online scenarios for a particularly challenging contact task: snap assemblies. We perform the snap assembly in simulated and real one-arm experiments and in a simulated two-arm experiment. This verification mechanism can be used by high-level planners or reasoning systems to enable intelligent failure recovery or to determine the next most optimal manipulation skill to be used.
    Comment: arXiv admin note: substantial text overlap with arXiv:1609.0494
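
    As a minimal, hypothetical sketch of this kind of pipeline (not the authors' implementation), one could encode segmented wrench windows as symbolic words, treat a sub-task as the resulting sentence, and classify it with a probabilistic multiclass SVM so that the online prediction carries a confidence value. The feature encoding and toy data below are assumptions made for illustration.

    # Hypothetical sketch: wrench segments -> symbolic words -> sentence -> probabilistic SVM.
    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.svm import SVC

    def encode_segment(seg):
        """Map one wrench segment (T x 6 array of Fx, Fy, Fz, Tx, Ty, Tz) to a symbolic word."""
        mean = seg.mean(axis=0)
        axis = int(np.argmax(np.abs(mean)))        # dominant wrench axis
        sign = "p" if mean[axis] >= 0 else "n"
        return f"w{axis}{sign}"

    def sentence(segments):
        """Concatenate segment words into the 'sentence' describing one sub-task."""
        return " ".join(encode_segment(s) for s in segments)

    rng = np.random.default_rng(0)
    # Toy data: two classes of sub-task, each a sequence of noisy wrench segments.
    nominal = [[rng.normal([1, 0, 0, 0, 0, 0], 0.1, (20, 6)) for _ in range(3)] for _ in range(10)]
    abnormal = [[rng.normal([0, 0, -1, 0, 0, 0], 0.1, (20, 6)) for _ in range(3)] for _ in range(10)]
    docs = [sentence(s) for s in nominal + abnormal]
    y = [0] * 10 + [1] * 10

    vec = CountVectorizer(token_pattern=r"\S+")    # bag-of-words over the action grammar
    X = vec.fit_transform(docs)
    clf = SVC(kernel="linear", probability=True).fit(X, y)   # probabilistic multiclass SVM

    # Online: classify the current window and report a temporal confidence value.
    probs = clf.predict_proba(vec.transform([sentence(nominal[0])]))[0]
    print("class:", clf.classes_[probs.argmax()], "confidence:", probs.max())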

    Spatiotemporal transcriptomic atlas of mouse organogenesis using DNA nanoball-patterned arrays.

    Spatially resolved transcriptomic technologies are promising tools to study complex biological processes such as mammalian embryogenesis. However, the imbalance between resolution, gene capture, and field of view of current methodologies precludes their systematic application to the analysis of relatively large and three-dimensional mid- and late-gestation embryos. Here, we combined DNA nanoball (DNB)-patterned arrays and in situ RNA capture to create spatial enhanced resolution omics-sequencing (Stereo-seq). We applied Stereo-seq to generate the mouse organogenesis spatiotemporal transcriptomic atlas (MOSTA), which maps with single-cell resolution and high sensitivity the kinetics and directionality of transcriptional variation during mouse organogenesis. We used this information to gain insight into the molecular basis of spatial cell heterogeneity and cell fate specification in developing tissues such as the dorsal midbrain. Our panoramic atlas will facilitate in-depth investigation of longstanding questions concerning normal and abnormal mammalian development.
    This work is part of the "SpatioTemporal Omics Consortium" (STOC) paper package. A list of STOC members is available at: http://sto-consortium.org. We would like to thank the MOTIC China Group, Rongqin Ke (Huaqiao University, Xiamen, China), Jiazuan Ni (Shenzhen University, Shenzhen, China), Wei Huang (Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China), and Jonathan S. Weissman (Whitehead Institute, Boston, USA) for their help. This work was supported by the grant of the Top Ten Fundamental Research Institutes of Shenzhen, the Shenzhen Key Laboratory of Single-Cell Omics (ZDSYS20190902093613831), and the Guangdong Provincial Key Laboratory of Genome Read and Write (2017B030301011); Longqi Liu was supported by the National Natural Science Foundation of China (31900466) and Miguel A. Esteban's laboratory at the Guangzhou Institutes of Biomedicine and Health by the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA16030502), the National Natural Science Foundation of China (92068106), and the Guangdong Basic and Applied Basic Research Foundation (2021B1515120075).
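
    As an illustrative sketch only (not the Stereo-seq analysis pipeline), spot-level transcript counts indexed by array coordinates are commonly aggregated into fixed-size spatial bins before downstream analysis of high-resolution arrays; the column names and bin size below are assumptions.

    # Hypothetical sketch: aggregate spot-level (x, y, gene, count) reads into bin x gene counts.
    import pandas as pd

    def bin_spots(spots: pd.DataFrame, bin_size: int = 50) -> pd.DataFrame:
        """spots: one row per (x, y, gene, count) spot read; returns a bin-by-gene count matrix."""
        spots = spots.assign(
            bin_x=spots["x"] // bin_size,
            bin_y=spots["y"] // bin_size,
        )
        return (
            spots.groupby(["bin_x", "bin_y", "gene"])["count"]
            .sum()
            .unstack(fill_value=0)      # bins as rows, genes as columns
        )

    # Toy example
    toy = pd.DataFrame({
        "x": [3, 57, 60, 8],
        "y": [4, 4, 90, 91],
        "gene": ["Sox2", "Sox2", "Pax6", "Pax6"],
        "count": [1, 2, 1, 3],
    })
    print(bin_spots(toy, bin_size=50))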

    SPPNet: A Single-Point Prompt Network for Nuclei Image Segmentation

    Image segmentation plays an essential role in nuclei image analysis. Recently, the segment anything model has made a significant breakthrough in such tasks. However, the current model has two major issues for cell segmentation: (1) the image encoder of the segment anything model involves a large number of parameters, so retraining or even fine-tuning the model still requires expensive computational resources; (2) in point-prompt mode, points are sampled from the centre of the ground truth, and more than one set of points is needed to achieve reliable performance, which is not efficient for practical applications. In this paper, a single-point prompt network, called SPPNet, is proposed for nuclei image segmentation. We replace the original image encoder with a lightweight vision transformer. Also, an effective convolutional block is added in parallel to extract low-level semantic information from the image and compensate for the performance degradation due to the smaller image encoder. We propose a new point-sampling method based on the Gaussian kernel. The proposed model is evaluated on the MoNuSeg-2018 dataset. The results demonstrate that SPPNet outperforms existing U-shape architectures and shows faster convergence in training. Compared to the segment anything model, SPPNet offers roughly 20 times faster inference with about 1/70 of the parameters and computational cost. In particular, only one set of points is required in both the training and inference phases, which is more practical for clinical applications. The code for our work and more technical details can be found at https://github.com/xq141839/SPPNet
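
    A hypothetical illustration of Gaussian-kernel-based point sampling (details will differ from the paper's method): draw a single prompt point from a ground-truth nucleus mask, weighting pixels by a Gaussian centred on the mask centroid so the prompt stays near the centre without being fixed to it. The function name and sigma value are assumptions.

    # Hypothetical sketch: sample one point prompt from a binary mask with Gaussian weighting.
    import numpy as np

    def sample_point_prompt(mask: np.ndarray, sigma: float = 5.0, rng=None):
        """mask: binary HxW array; returns one (row, col) prompt point inside the mask."""
        rng = rng or np.random.default_rng()
        ys, xs = np.nonzero(mask)
        cy, cx = ys.mean(), xs.mean()                      # mask centroid
        d2 = (ys - cy) ** 2 + (xs - cx) ** 2
        w = np.exp(-d2 / (2.0 * sigma ** 2))               # Gaussian kernel weights
        w /= w.sum()
        i = rng.choice(len(ys), p=w)
        return int(ys[i]), int(xs[i])

    # Toy example: a small square "nucleus"
    mask = np.zeros((64, 64), dtype=np.uint8)
    mask[20:40, 25:45] = 1
    print(sample_point_prompt(mask, sigma=4.0))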